1,323 research outputs found

    Charles Bukowski


    Personalised Learning: Developing a Vygotskian Framework for E-learning

    Personalisation has emerged as a central feature of recent educational strategies in the UK and abroad. At the heart of this is a vision to empower learners to take more ownership of their learning and to develop autonomy. While the introduction of digital technologies is not enough to effect this change, embedding the affordances of new technologies is expected to offer new routes for creating personalised learning environments. The approach is not unique to education: consumer technologies offer a 'personalised' relationship which is both engaging and dynamic, and the challenge for learning providers is to capture and transpose this to educational contexts. As learners begin to use a range of tools for communicative and collaborative activity, the first part of this paper analyses activity logs to uncover trends in maturing e-learning platforms across more than 100 UK learning providers. While personalisation appeals to marketing theories, this paper argues that if learning is to become personalised, one must ask what the optimal instruction for any particular learner is. For Vygotsky this question is grounded in the zone of proximal development, a way of understanding the causal dynamics of development that allows appropriate pedagogical interventions. The second part of this paper interprets personalised learning as the organising principle for a sense-making framework for e-learning. In this approach, personalised learning provides the context for assessing the capabilities of e-learning, with Vygotsky's zone of proximal development serving as the framework for assessing learner potential and development.

    Minimizing Communication for Eigenproblems and the Singular Value Decomposition

    Algorithms have two costs: arithmetic and communication. The latter represents the cost of moving data, either between levels of a memory hierarchy or between processors over a network. Communication often dominates arithmetic and represents a rapidly increasing proportion of the total cost, so we seek algorithms that minimize communication. In \cite{BDHS10} lower bounds were presented on the amount of communication required for essentially all O(n^3)-like algorithms for linear algebra, including eigenvalue problems and the SVD. Conventional algorithms, including those currently implemented in (Sca)LAPACK, perform asymptotically more communication than these lower bounds require. In this paper we present parallel and sequential eigenvalue algorithms (for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms that do attain these lower bounds, and analyze their convergence and communication costs.
    Comment: 43 pages, 11 figures
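    The lower bounds this abstract refers to scale as Ω(n^3/√M) words moved, where M is the fast-memory (cache) size. As a rough illustration of why blocked algorithms can approach such bounds while naive loop orderings cannot, the following toy cost model can be sketched; the formulas and constants here are simplifying assumptions for exposition, not the paper's exact bounds:

    ```python
    import math

    def words_moved_naive(n, M):
        # A naive triple-loop matrix multiply with no blocking re-reads
        # operands from slow memory roughly n^3 times when n*n >> M.
        return n ** 3

    def words_moved_blocked(n, M):
        # Tiling with b x b blocks, b = sqrt(M/3), keeps three tiles
        # resident in fast memory at once.  Each of the (n/b)^3 block
        # products loads about 3*b^2 words, so the total is
        # O(n^3 / sqrt(M)) -- matching the lower bound up to constants.
        # Assumes M >= 3 so that b >= 1, and b divides n for simplicity.
        b = math.isqrt(M // 3)
        return (n // b) ** 3 * 3 * b * b

    # With n = 1024 and a cache holding three 64x64 tiles, blocking
    # moves far fewer words than the naive ordering.
    n, M = 1024, 3 * 64 * 64
    print(words_moved_naive(n, M), words_moved_blocked(n, M))
    ```

    The same reorganization idea, applied to factorizations and eigensolvers rather than matrix multiply, is what the communication-optimal algorithms in this line of work formalize.
    
    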

    Exploration of Web Technologies: A Real World Application

    Our team created a web application for a photography studio. In addition to a portfolio for the studio, the application required the ability to manage photographer schedules, handle and organize orders, and provide secure user accounts with different access levels for the site.

    Communication-optimal Parallel and Sequential Cholesky Decomposition

    Numerical algorithms have two kinds of costs: arithmetic and communication, by which we mean either moving data between levels of a memory hierarchy (in the sequential case) or over a network connecting processors (in the parallel case). Communication costs often dominate arithmetic costs, so it is of interest to design algorithms minimizing communication. In this paper we first extend known lower bounds on the communication cost (both for bandwidth and for latency) of conventional (O(n^3)) matrix multiplication to Cholesky factorization, which is used for solving dense symmetric positive definite linear systems. Second, we compare the costs of various Cholesky decomposition implementations to these lower bounds and identify the algorithms and data structures that attain them. In the sequential case, we consider both the two-level and hierarchical memory models. Combined with prior results in [13, 14, 15], this gives a set of communication-optimal algorithms for O(n^3) implementations of the three basic factorizations of dense linear algebra: LU with pivoting, QR, and Cholesky. But it goes beyond this prior work on sequential LU by optimizing communication for any number of levels of memory hierarchy.
    Comment: 29 pages, 2 tables, 6 figures
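    For readers unfamiliar with the factorization itself, a textbook unblocked Cholesky can be sketched as follows. This is the classic O(n^3) baseline, not the communication-optimal blocked variant the paper develops; the paper's contribution lies in reorganizing this computation so that data movement, rather than arithmetic, is minimized.

    ```python
    import math

    def cholesky(A):
        """Factor a symmetric positive definite matrix A as L @ L^T.

        A is a row-major list of lists; returns the lower-triangular
        factor L.  Unblocked column-by-column elimination: each column j
        uses only columns 0..j-1 of L, which is what makes the blocked
        and recursive reorganizations studied in this literature possible.
        """
        n = len(A)
        L = [[0.0] * n for _ in range(n)]
        for j in range(n):
            # Diagonal entry: subtract the squared row of L computed so far.
            s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
            L[j][j] = math.sqrt(s)  # s > 0 when A is SPD
            # Entries below the diagonal in column j.
            for i in range(j + 1, n):
                L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
        return L

    # Example: [[4, 2], [2, 3]] factors as [[2, 0], [1, sqrt(2)]].
    L = cholesky([[4.0, 2.0], [2.0, 3.0]])
    ```
    
    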

    Overnight Stop


    Spain Travels


    A Departure
